During feature learning, existing hashing methods can neither distinguish the importance of the feature information in each region nor exploit label information to explore the correlation between modalities. To address these problems, an Adaptive Hybrid Attention Hashing for deep cross-modal retrieval (AHAH) model was proposed. First, channel attention and spatial attention were fused with weights obtained through autonomous learning, strengthening attention to relevant target regions while weakening attention to irrelevant ones. Second, the proposed similarity measurement method statistically analyzed modality labels and quantified the degree of similarity to a number between 0 and 1, thereby expressing inter-modality similarity at a finer granularity. Compared with the state-of-the-art method Multi-Label Semantics Preserving Hashing (MLSPH) on four widely used datasets, MIRFLICKR-25K, NUS-WIDE, MSCOCO, and IAPR TC-12, the proposed method improved the retrieval mean Average Precision (mAP) by 2.25%, 1.75%, 6.8%, and 2.15% respectively with 16-bit hash codes. In addition, ablation experiments and efficiency analysis further demonstrated the effectiveness of the proposed method.
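The two components described above can be illustrated with a minimal PyTorch sketch. This is not the authors' released implementation: the module structure, the squeeze-and-excitation-style channel attention, the learnable fusion parameters, and the Jaccard-style reading of "quantification of similarity degrees to numbers between 0 and 1" are all assumptions made for illustration.

```python
import torch
import torch.nn as nn


class AdaptiveHybridAttention(nn.Module):
    """Hypothetical sketch: fuse channel and spatial attention with
    learnable scalar weights, as the abstract describes."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Channel attention (squeeze-and-excitation style; an assumption)
        self.channel_fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention computed from pooled channel statistics
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )
        # Fusion weights obtained by autonomous learning,
        # normalized with softmax so they stay a convex combination
        self.fusion = nn.Parameter(torch.zeros(2))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        ca = x * self.channel_fc(x)  # channel-attended features
        pooled = torch.cat(
            [x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)],
            dim=1,
        )
        sa = x * self.spatial_conv(pooled)  # spatially attended features
        w = torch.softmax(self.fusion, dim=0)  # learned mixing weights
        return w[0] * ca + w[1] * sa


def label_similarity(l1: torch.Tensor, l2: torch.Tensor) -> torch.Tensor:
    """Quantify multi-label similarity to [0, 1].

    Assumed reading: normalized label overlap (Jaccard-style) between
    multi-hot label matrices l1 of shape (n, c) and l2 of shape (m, c).
    """
    inter = l1 @ l2.t()  # number of shared labels per pair
    union = l1.sum(1, keepdim=True) + l2.sum(1) - inter
    return inter / union.clamp(min=1)  # fine-grained score in [0, 1]
```

Under this reading, two samples sharing some but not all labels receive a fractional similarity rather than the binary 0/1 used by conventional pairwise supervision, which is one way to express the finer-grained inter-modality similarity the abstract claims.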